Training machine learning classifiers for segmentation is a multi-step process that includes selecting the inputs, adding features to the features tree, training the model, and then reviewing the training results. You can segment datasets after a classifier is trained.
The following items are required for training a machine learning model for segmentation:
The following items are optional for training a machine learning model for segmentation:
As shown in the illustration below, three distinct material phases in a training dataset were labeled as separate classes. You should note that labeling can be done directly on a multi-ROI as of Dragonfly version 2020.1 (see Labeling Multi-ROIs for Segmentation), or on multiple regions of interest, from which you can create a multi-ROI (see Creating Multi-ROIs from Regions of Interest).
The figure below shows an example of segmentation labels provided to the classifier for pixel-based training.
Segmentation labels for pixel-based training
The figure below shows an example of segmentation labels provided to the classifier for region-based training.
Segmentation labels for region-based training
Important Whenever you create segmentation labels for region-based training, you must ensure that the region of interest of one class does not overlap the region of another class. If required, you can generate the regions prior to creating the segmentation labels. Refer to Generating Regions for information about selecting region generators and generating regions.
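The non-overlap requirement can be verified programmatically. The sketch below, a minimal illustration assuming each class's region of interest is available as a boolean NumPy mask (a hypothetical representation, not Dragonfly's internal one), checks that no voxel is claimed by more than one class:

```python
import numpy as np

def check_no_overlap(class_masks):
    """Return True if no voxel belongs to more than one class mask."""
    # Count how many classes claim each voxel.
    counts = np.zeros(class_masks[0].shape, dtype=np.uint8)
    for mask in class_masks:
        counts += mask.astype(np.uint8)
    return bool((counts <= 1).all())

# Two class ROIs on a small 2-D slice: disjoint labels pass the check.
a = np.zeros((4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4), dtype=bool); b[2:] = True
```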
You can use any of the segmentation tools available on the ROI Painter and ROI Tools panels to label the voxels of a multi-ROI for training a deep model for semantic segmentation (see ROI Painter Tools and ROI Tools). You should note that training is always done on the image plane and all classes must be labeled on that plane.

The requested number of classes is added, as shown below.


You can also change the zoom factor and position, as well as adjust window leveling to facilitate segmentation (see Using the Manipulate Tools and Window Leveling).
For example, you may need to label voxels that correspond to a specific material type or phase or to an anatomical structure.

Note You can use any of the segmentation tools available on the ROI Painter and ROI Tools panels to label the multi-ROI (see ROI Painter Tools and ROI Tools).
The number of labeled voxels that correspond to each class is indicated in the Voxels column.
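The per-class counts reported in the Voxels column amount to a simple tally over the label data. A sketch, assuming a multi-ROI stored as an integer label volume in which 0 marks unlabeled voxels (an illustrative representation only):

```python
import numpy as np

# Hypothetical multi-ROI stored as an integer label volume (0 = unlabeled).
labels = np.array([[0, 1, 1],
                   [2, 2, 0],
                   [1, 0, 2]])

# Tally voxels per class id; index 0 counts the unlabeled background.
counts = np.bincount(labels.ravel())
for class_id, n in enumerate(counts[1:], start=1):
    print(f"class {class_id}: {n} voxels")
```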

Important The multi-ROI does not need to be fully segmented.
As an alternative to working directly on a multi-ROI, you can label voxels on multiple regions of interest and then create a multi-ROI from those labeled ROIs (see Creating Multi-ROIs from Regions of Interest).
Refer to the following instructions for information about creating new models for machine learning segmentation.
The Machine Learning Segmentation dialog appears.

Note Refer to Model Details for information about entering a model category and general documentation.
Refer to the following instructions for information about selecting the model inputs, which include the training dataset(s), segmentation labels, and mask(s). See Input Panel for more information about model inputs.
The Input panel appears (see Input Panel).
Do the following to add another dataset:
An additional input appears.
The Labels box is automatically populated with the labels in the multi-ROI.

Note The color and name of each label is saved with the model and appears on the Model panel after the model is trained.
Adding masks, which must include all of the segmentation labels and must be created on a region of interest, can help reduce training times and increase training accuracy. Without a mask, the whole dataset(s) will be used for training.
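The effect of a mask can be pictured as restricting which voxels contribute training samples. A minimal sketch, assuming the dataset, labels, and mask are available as NumPy arrays (hypothetical data, not Dragonfly's internal structures):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))              # hypothetical training dataset
labels = rng.integers(0, 3, (8, 8))    # 0 = unlabeled, 1..2 = classes
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                   # region of interest used as the mask

# Only voxels that are labeled AND inside the mask feed the training set,
# which is why a mask can shorten training times.
selected = (labels > 0) & mask
X = image[selected].reshape(-1, 1)      # feature: raw intensity
y = labels[selected]
```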

Refer to the following instructions for information about selecting the model methods, which include the machine learning algorithm, working area, and features tree.
Refer to the following instructions for information about choosing an algorithm for training a classifier (see Algorithms for information about the algorithms available for machine learning segmentation).
The Method panel appears (see Method Panel).
You should note that different algorithms will react differently to the same inputs.
To modify the default settings, check the Advanced box and then choose the required settings.

Note If required, you can save your changes as the default settings for an algorithm by clicking the Save As Default button. Otherwise, your changes will be saved with the model only.
Refer to the following instructions for information about choosing a working area (see Working Areas).


Note Refer to the topic Generating Regions for information about the available region generators, as well as instructions for computing regions.
Refer to the following instructions for information about adding features to a selected working area.
In pixel-based training, the dataset features are extracted directly from the intensity value(s) of each pixel. See Dataset Features for more information about the dataset features.
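This pixel-based setup can be sketched with scikit-learn, used here purely for illustration; Dragonfly's own classifier implementation may differ, and the intensity populations below are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical slice with two well-separated intensity populations.
intensities = np.concatenate([rng.normal(0.2, 0.05, 500),
                              rng.normal(0.8, 0.05, 500)])
labels = np.concatenate([np.full(500, 1), np.full(500, 2)])

# Pixel-based training: each pixel's intensity is its whole feature vector.
X = intensities.reshape(-1, 1)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
```

With populations this well separated, a new intensity near 0.2 is classified as class 1 and one near 0.8 as class 2.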
The dataset feature is added to the selected dataset.
Note You can also right-click the required preset in the Dataset Features box and then choose Add to and then select an option in the submenu. You can also choose Add to All to add the selected preset to all datasets in the features tree.

To preview a dataset feature, right-click the feature preset and then choose Preview in the pop-up menu.

The Preview Trainer dialog appears. Refer to the topic Previewing Dataset Features for information about previewing the filters in a dataset feature preset.
When the classifier works on regions rather than directly at the pixel level, information is extracted from each region to build the feature vector. The features extracted can be different metrics that represent the region itself, for example the histogram of the intensities of the pixels in the region, or metrics that compare a region with its surroundings, as is done with the Earth Mover's Distance. See Region Features for more information about region features.
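Both region-level features mentioned above can be sketched in a few lines of NumPy. The example below, an illustration on invented pixel values, builds a normalized intensity histogram per region and computes a 1-D Earth Mover's Distance between a region and its surroundings, using the standard closed form for 1-D distributions (the L1 distance between cumulative distributions):

```python
import numpy as np

def region_histogram(values, bins=8, value_range=(0.0, 1.0)):
    """Normalized intensity histogram: one region-level feature vector."""
    hist, _ = np.histogram(values, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def emd_1d(hist_a, hist_b):
    """1-D Earth Mover's Distance between two normalized histograms,
    computed as the L1 distance between their cumulative distributions."""
    return float(np.abs(np.cumsum(hist_a) - np.cumsum(hist_b)).sum())

region = np.array([0.10, 0.15, 0.20, 0.12])    # hypothetical region pixels
surround = np.array([0.70, 0.80, 0.75, 0.85])  # hypothetical surroundings
distance = emd_1d(region_histogram(region), region_histogram(surround))
```

A region whose intensities differ strongly from its surroundings yields a large distance; identical distributions yield zero.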
Regions can be added to any dataset feature in the features tree. As a minimum, you can add the dataset feature Self to the inputs in the features tree when you work with region-based training.
See Adding Dataset Features for information about adding dataset features.
The region feature is added to the selected dataset feature.
Note You can also right-click the required preset in the Region Features box and then choose Add to and then select an option in the submenu. You can also choose Add to All to add the selected preset to all dataset features in the features tree.

To preview a region feature, right-click the feature preset and then choose Preview in the pop-up menu.

The Preview Trainer dialog appears. Refer to the topic Previewing Region Features for information about previewing the filters in a region preset.
You can train a classifier after you have added the required features.
When feature extraction and model fitting are complete, a preview of the classification result appears in the current view, as shown below.

Note You can threshold the results on the Result panel, as well as evaluate the proposed segmentation with the help of a confidence map (see Preview Rendering).
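The confidence-map idea can be illustrated with per-class probabilities: the proposed segmentation is the most probable class per voxel, its confidence is that class's probability, and thresholding flags low-confidence voxels for review. A sketch on invented probability values (not Dragonfly's actual data layout):

```python
import numpy as np

# Hypothetical per-class probabilities for a 2x3 slice with 3 classes.
proba = np.array([[[0.90, 0.05, 0.05], [0.40, 0.35, 0.25], [0.10, 0.80, 0.10]],
                  [[0.50, 0.30, 0.20], [0.20, 0.20, 0.60], [0.05, 0.05, 0.90]]])

prediction = proba.argmax(axis=-1)   # proposed segmentation (class per voxel)
confidence = proba.max(axis=-1)      # confidence map, values in [0, 1]

# Threshold: accept only predictions the model is reasonably sure about.
threshold = 0.5
accepted = confidence >= threshold   # low-confidence voxels left for review
```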
The relevance of each feature preset — on a scale of 0 to 1 and summing to 1 across all presets — is displayed in the features tree to help you judge whether the feature preset provides helpful information for the classification. Features determined to be ineffective can be edited or removed from the features tree.
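Relevance scores of this kind behave like the feature importances reported by tree-based classifiers. The sketch below, using scikit-learn purely as an analogy for how Dragonfly's relevance values can be read, trains on one informative feature and one pure-noise feature; the importances sum to 1 and the noise feature scores near zero, marking it as a candidate for removal:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 400
informative = rng.normal(0, 1, n)   # feature that determines the class
noise = rng.normal(0, 1, n)         # feature carrying no class information
y = (informative > 0).astype(int)
X = np.column_stack([informative, noise])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Importances are in [0, 1] and sum to 1, like the relevance values
# shown in the features tree; the noise feature scores near zero.
print(clf.feature_importances_)
```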

By default, a fully segmented multi-ROI will be generated and added to the Data Properties and Settings panel. You can also export a confidence map, if required (see Export).